138 research outputs found

    Towards concept identification using a knowledge-intensive approach

    This paper presents a method for identifying concepts in microposts and classifying them into a predefined set of categories. The method relies on the DBpedia knowledge base to identify the types of the concepts detected in the messages. For those concepts that are not classified in the ontology, we infer their types via the ontology properties that characterise each type.
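    A minimal sketch of the kind of DBpedia type lookup such a method depends on is shown below: it queries the public SPARQL endpoint for the rdf:type statements of a detected concept. The endpoint URL and the example resource are assumptions, and the paper's full pipeline (concept detection in microposts plus property-based type inference) is considerably richer.

```python
# Minimal sketch: fetch the rdf:type statements DBpedia holds for a detected
# concept. The endpoint and example resource are illustrative; the paper's
# method also infers types from ontology properties when no direct
# classification exists.
import requests

DBPEDIA_SPARQL = "https://dbpedia.org/sparql"  # public endpoint (assumption)


def dbpedia_types(resource_uri: str) -> list[str]:
    """Return the rdf:type URIs DBpedia asserts for the given resource."""
    query = f"SELECT DISTINCT ?type WHERE {{ <{resource_uri}> a ?type . }}"
    response = requests.get(
        DBPEDIA_SPARQL,
        params={"query": query, "format": "application/sparql-results+json"},
        timeout=30,
    )
    response.raise_for_status()
    bindings = response.json()["results"]["bindings"]
    return [b["type"]["value"] for b in bindings]


if __name__ == "__main__":
    # Hypothetical concept detected in a micropost:
    for type_uri in dbpedia_types("http://dbpedia.org/resource/Berlin"):
        print(type_uri)
```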

    PPROC, an ontology for transparency in public procurement

    Public procurement, or tendering, refers to the process followed by public authorities for the procurement of goods and services. In most developed countries, the law requires public authorities to provide online information to ensure competitive tendering as far as possible, for which the adequate announcement of tenders is an essential requirement. In addition, transparency laws being proposed in such countries are making the monitoring of public contracts by citizens a fundamental right. This paper describes the PPROC ontology, which has been developed to support both processes, publication and accountability, by semantically describing public procurement processes and contracts. The PPROC ontology is extensive, since it covers not only the usual data about the tender, its objectives, deadlines and awardees, but also details of the whole process, from the initial contract publication to its termination. This makes it possible to use the ontology both for open data publication and for the overall management of the public procurement process.
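    As a toy illustration of how such a contract description might be published as RDF, the sketch below builds a small graph with rdflib. The namespace URI and the class and property names (pproc:Contract, pproc:tenderDeadline, pproc:awardedTo) are assumptions for illustration, not necessarily the authoritative PPROC vocabulary.

```python
# Toy sketch: publishing a public contract as RDF in the spirit of PPROC.
# The namespace URI and the class/property names are illustrative assumptions,
# not the authoritative PPROC vocabulary.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS, XSD

PPROC = Namespace("http://contsem.unizar.es/def/sector-publico/pproc#")  # assumed URI
EX = Namespace("http://example.org/contracts/")

g = Graph()
g.bind("pproc", PPROC)

contract = EX["2024-0001"]
g.add((contract, RDF.type, PPROC.Contract))  # assumed class name
g.add((contract, RDFS.label, Literal("Road maintenance services", lang="en")))
g.add((contract, PPROC.tenderDeadline,       # assumed property name
       Literal("2024-06-30", datatype=XSD.date)))
g.add((contract, PPROC.awardedTo, EX["acme-ltd"]))  # assumed property name

print(g.serialize(format="turtle"))
```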

    WebODE: An integrated workbench for ontology representation, reasoning, and exchange

    We present WebODE, a scalable, integrated workbench for ontological engineering that eases the modelling of ontologies, reasoning with them, and their exchange with other ontology tools and ontology-based applications. We first describe WebODE's knowledge model. We then describe its extensible architecture, focusing on the set of independent ontology development functionalities integrated in this framework, such as the Ontology Editor, the Axiom Builder, the OKBC-based inference engine, and the documentation and interoperability services.

    Identifying Topics in Social Media Posts using DBpedia

    This paper describes a method for identifying topics in text published on social media by applying topic recognition techniques that exploit DBpedia. We evaluate this method on social media content in Spanish and report the results of the evaluation.
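    One way to experiment with DBpedia-based topic recognition on short Spanish texts is the public DBpedia Spotlight service, sketched below. This is not necessarily the pipeline evaluated in the paper, and the endpoint URL, language path and confidence threshold are assumptions.

```python
# Sketch: annotating a short Spanish post with DBpedia resources via the
# public DBpedia Spotlight service. This illustrates DBpedia-based topic
# recognition in general, not necessarily the paper's method; the endpoint
# and parameters are assumptions.
import requests

SPOTLIGHT_ES = "https://api.dbpedia-spotlight.org/es/annotate"  # assumed endpoint


def annotate(text: str, confidence: float = 0.5) -> list[dict]:
    """Return the DBpedia resources Spotlight finds in the given text."""
    response = requests.get(
        SPOTLIGHT_ES,
        params={"text": text, "confidence": confidence},
        headers={"Accept": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("Resources", [])


if __name__ == "__main__":
    for resource in annotate("El Real Madrid gana la Liga de Campeones"):
        print(resource["@URI"], resource.get("@types", ""))
```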

    A Semantic Grid Oriented to E-Tourism

    With the increasing complexity of tourism business models and tasks, there is a clear need for a next-generation e-Tourism infrastructure that supports flexible automation, integration, computation, storage, and collaboration. Several enabling technologies, such as the Semantic Web, Web services, agents, and grid computing, have been applied in different e-Tourism applications; however, there is no unified framework able to integrate all of them. This paper therefore presents a promising e-Tourism framework based on the emerging semantic grid, discussing a number of key design issues including architecture, ontology structure, semantic reconciliation, service and resource discovery, role-based authorization, and intelligent agents. The paper finally describes the implementation of the framework.

    Benchmarking RDF Storage Engines

    In this deliverable, we present version V1.0 of SRBench, the first benchmark for streaming RDF engines, designed in the context of Task 1.4 of PlanetData and based entirely on real-world datasets. As the volume of streaming data grows faster than our ability to derive knowledge from it, researchers have set out for solutions in which Semantic Web technologies are adapted and extended for publishing, sharing, analysing and understanding such data, and various approaches are emerging. To help researchers and users compare streaming RDF engines in a standardised application scenario, we propose SRBench, with which one can assess the ability of a streaming RDF engine to cope with a broad range of use cases typically encountered in real-world scenarios. We offer a set of queries that cover the major aspects of streaming RDF processing, ranging from simple pattern-matching queries to queries with complex reasoning tasks. To give a first baseline and illustrate the state of the art, we show results obtained from implementing SRBench with the SPARQLStream query-processing engine developed by UPM.
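    For a flavour of the simple pattern-matching end of such a query set, the sketch below holds an illustrative windowed query over a weather stream. The window clause follows the C-SPARQL style of query registration with RANGE/STEP windows; SPARQLStream and other engines use somewhat different grammars, and the stream URI and vocabulary are assumptions rather than the benchmark's actual dataset terms.

```python
# Illustrative streaming query of the simple pattern-matching kind covered by
# the benchmark: "observations from the last 10 minutes, with their sensors".
# The window clause follows the C-SPARQL registration style; other engines use
# different grammars, and the stream URI and vocabulary are assumptions, not
# the benchmark's actual dataset terms.
STREAMING_QUERY = """
REGISTER QUERY recent_observations AS
PREFIX ssn: <http://purl.oclc.org/NET/ssnx/ssn#>
SELECT ?sensor ?value
FROM STREAM <http://example.org/streams/weather> [RANGE 10m STEP 1m]
WHERE {
    ?observation ssn:observedBy ?sensor ;
                 ssn:observationResult ?result .
    ?result ssn:hasValue ?value .
}
"""

# In practice this string would be handed to a streaming engine's
# query-registration API; here we only print it.
print(STREAMING_QUERY)
```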

    Citizen science to monitor light pollution – a useful tool for studying human impacts on the environment

    Citizen science, the active participation of the public in scientific research projects, is a rapidly expanding field in open science and open innovation. It provides an integrated model of public knowledge production and engagement with science. As a growing worldwide phenomenon, it is invigorated by new technologies that connect people easily and effectively with the scientific community. Catalysed by citizens’ wishes to be actively involved in scientific processes and by recent societal trends, it also contributes to the rise in tertiary education. In addition, citizen science provides a valuable tool for citizens to play a more active role in sustainable development. This book identifies and explains the role of citizen science within innovation in science and society, and as a vibrant and productive science-policy interface. The scope of this volume is global, geared towards identifying solutions and lessons to be applied across science, practice and policy. The chapters consider the role of citizen science in the context of the wider agenda of open science and open innovation, and discuss progress towards responsible research and innovation, two of the most critical aspects of science today.

    FunMap: Efficient Execution of Functional Mappings for Knowledge Graph Creation

    Data has grown exponentially in recent years, and knowledge graphs constitute powerful formalisms for integrating a myriad of existing data sources. Transformation functions, specified with function-based mapping languages such as FunUL and RML+FnO, can be applied to overcome interoperability issues across heterogeneous data sources. However, the absence of engines that efficiently execute these mapping languages hinders their global adoption. We propose FunMap, an interpreter of function-based mapping languages; it relies on a set of lossless rewriting rules to push down and materialize the execution of functions into the initial steps of knowledge graph creation. Although applicable to any function-based mapping language that supports joins between mapping rules, FunMap's feasibility is shown on RML+FnO. FunMap reduces data redundancy, e.g., duplicates and unused attributes, and converts RML+FnO mappings into a set of equivalent rules executable on RML-compliant engines. We evaluate FunMap's performance on real-world testbeds from the biomedical domain. The results indicate that FunMap reduces the execution time of RML-compliant engines by up to a factor of 18, thus furnishing a scalable solution for knowledge graph creation.
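    A much-simplified illustration of the push-down idea is sketched below: the output of a transformation function is materialized once as a new attribute of the raw source, so that the rewritten mapping rules handed to an RML-compliant engine only reference plain attributes. This is not FunMap itself; the file names, column names and the normalization function are assumptions.

```python
# Much-simplified illustration of the "push down and materialize functions"
# idea: apply the transformation function over the raw source once, write the
# result as a new attribute, and let an RML-compliant engine consume the
# pre-processed file through plain attribute references. File names, column
# names and the normalization function are illustrative assumptions.
import csv


def normalize_gene_label(label: str) -> str:
    """Toy stand-in for a FnO-described transformation function."""
    return label.strip().upper()


def materialize_function(src_path: str, dst_path: str) -> None:
    """Pre-compute the function output as an extra column of the source."""
    with open(src_path, newline="", encoding="utf-8") as src, \
         open(dst_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(
            dst, fieldnames=list(reader.fieldnames) + ["gene_label_norm"]
        )
        writer.writeheader()
        for row in reader:
            row["gene_label_norm"] = normalize_gene_label(row["gene_label"])
            writer.writerow(row)


# The rewritten mapping rules can now reference the pre-computed
# "gene_label_norm" column instead of calling the function per triple.
materialize_function("genes.csv", "genes_preprocessed.csv")
```

    Because each function output is computed once and joined back as an ordinary attribute, repeated computation disappears and unused source attributes can be dropped in the same pass, which is the reduction in data redundancy the abstract refers to.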

    SRBench: A streaming RDF/SPARQL benchmark

    We introduce SRBench, a general-purpose benchmark primarily designed for streaming RDF/SPARQL engines, based entirely on real-world data sets from the Linked Open Data cloud. With the increasing problem of too much streaming data but not enough tools to gain knowledge from it, researchers have set out for solutions in which Semantic Web technologies are adapted and extended for publishing, sharing, analysing and understanding streaming data. To help researchers and users compare streaming RDF/SPARQL (strRS) engines in a standardised application scenario, we have designed SRBench, with which one can assess the abilities of a strRS engine to cope with a broad range of use cases typically encountered in real-world scenarios. The data sets used in the benchmark have been carefully chosen to represent a realistic and relevant usage of streaming data. The benchmark defines a concise yet comprehensive set of queries that cover the major aspects of strRS processing. Finally, our work is complemented with a functional evaluation of three representative strRS engines: SPARQLStream, C-SPARQL and CQELS. The presented results are meant to give a first baseline and illustrate the state of the art.
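    Before comparing streaming engines, the static (non-windowed) core of a pattern-matching query can be sanity-checked against a snapshot of the dataset, as sketched below with rdflib. The snapshot file name and the vocabulary are illustrative assumptions, not the benchmark's actual data.

```python
# Sketch: checking the static (non-windowed) core of a pattern-matching query
# against a snapshot of the dataset with rdflib, e.g. to establish expected
# answers before comparing streaming engines. The snapshot file and the
# vocabulary are illustrative assumptions.
from rdflib import Graph

g = Graph()
g.parse("weather_snapshot.ttl", format="turtle")  # hypothetical dump of one window

results = g.query("""
    PREFIX ssn: <http://purl.oclc.org/NET/ssnx/ssn#>
    SELECT ?sensor ?value
    WHERE {
        ?observation ssn:observedBy ?sensor ;
                     ssn:observationResult ?result .
        ?result ssn:hasValue ?value .
    }
""")

for sensor, value in results:
    print(sensor, value)
```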

    A semantic sensor web for environmental decision support applications

    Sensing devices are increasingly being deployed to monitor the physical world around us. One class of application for which sensor data is pertinent is environmental decision support systems, e.g., flood emergency response. For these applications, the sensor readings need to be put in context by integrating them with other sources of data about the surrounding environment. Traditional systems for predicting and detecting floods rely on methods that require significant human resources. In this paper we describe a semantic sensor web architecture for integrating multiple heterogeneous datasets, including live and historic sensor data, databases, and map layers. The architecture provides mechanisms for discovering datasets, defining integrated views over them, continuously receiving data in real time, and visualising and interacting with the data on screen. Our approach makes extensive use of web service standards for querying and accessing data, and of semantic technologies to discover and integrate datasets. We demonstrate the use of our semantic sensor web architecture in the context of a flood response planning web application that uses data from sensor networks monitoring the sea state around the coast of England.
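    The integration step described here can be pictured with the toy sketch below: live readings are pulled from a hypothetical JSON sensor feed and joined with static context about each station before display. The feed URL, payload shape and station metadata are assumptions; the paper's architecture realises this with web service standards and semantic technologies.

```python
# Toy sketch of putting sensor readings in context: fetch live readings from a
# hypothetical JSON feed and join them with static metadata about each station.
# The feed URL, payload shape and metadata values are assumptions; the paper's
# architecture does this with web service standards and semantic technologies.
import requests

STATION_CONTEXT = {  # static context layer (illustrative)
    "station-042": {"name": "Plymouth buoy", "lat": 50.33, "lon": -4.17},
}


def contextualised_readings(feed_url: str) -> list[dict]:
    """Merge each live reading with what we know about its station."""
    readings = requests.get(feed_url, timeout=30).json()  # e.g. [{"station": ..., "wave_height_m": ...}]
    return [
        {**reading, **STATION_CONTEXT.get(reading.get("station"), {})}
        for reading in readings
    ]


if __name__ == "__main__":
    for row in contextualised_readings("https://example.org/sea-state/latest.json"):
        print(row)
```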